Augusta
- Europe > United Kingdom > England > Merseyside > Liverpool (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > Georgia > Richmond County > Augusta (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
A Closed-Form Framework for Schrödinger Bridges Between Arbitrary Densities
Score-based generative models have recently attracted significant attention for their ability to generate high-fidelity data by learning maps from simple Gaussian priors to complex data distributions. A natural generalization of this idea to transformations between arbitrary probability distributions leads to the Schrödinger Bridge (SB) problem. However, SB solutions rarely admit closed-form expressions and are commonly obtained through iterative stochastic simulation procedures, which are computationally intensive and can be unstable. In this work, we introduce a unified closed-form framework for representing the stochastic dynamics of SB systems. Our formulation subsumes previously known analytical solutions, including the Schrödinger-Föllmer process and the Gaussian SB, as specific instances. Notably, the classical Gaussian SB solution, previously derived using substantially more sophisticated tools such as Riemannian geometry and generator theory, follows directly from our formulation as an immediate corollary. Leveraging this framework, we develop a simulation-free algorithm that infers SB dynamics directly from samples of the source and target distributions. We demonstrate the versatility of our approach in two settings: (i) modeling developmental trajectories in single-cell genomics and (ii) solving image restoration tasks such as inpainting and deblurring. This work opens a new direction for efficient and scalable nonlinear diffusion modeling across scientific and machine learning applications.
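The abstract's closed-form construction is not reproduced here, but the simplest closed-form bridge process, the Brownian bridge between two fixed endpoints, illustrates the kind of pinned stochastic dynamics an SB generalizes. A minimal Euler-Maruyama sketch (all names and parameter values are illustrative, not from the paper):

```python
import numpy as np

def simulate_brownian_bridge(x0, x1, T=1.0, n_steps=1000, sigma=0.1, seed=0):
    """Euler-Maruyama simulation of the Brownian bridge SDE
        dX_t = (x1 - X_t) / (T - t) dt + sigma dW_t,
    the simplest closed-form bridge: it pins X_0 = x0 and X_T = x1.
    An SB generalizes this from point endpoints to arbitrary densities."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    path = np.empty(n_steps + 1)
    path[0] = x0
    for k in range(n_steps):
        t = k * dt
        drift = (x1 - path[k]) / (T - t)  # pull toward the terminal point
        path[k + 1] = path[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return path

path = simulate_brownian_bridge(x0=-1.0, x1=2.0)
```

The time-dependent drift grows as `t` approaches `T`, forcing the path onto the target endpoint regardless of the noise along the way.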
- North America > United States > Georgia > Richmond County > Augusta (0.14)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
Towards Effective Federated Graph Foundation Model via Mitigating Knowledge Entanglement
Zhu, Yinlin, Li, Xunkai, Jia, Jishuo, Hu, Miao, Wu, Di, Qiu, Meikang
Recent advances in graph machine learning have shifted to data-centric paradigms, driven by two emerging fields: (1) Federated graph learning (FGL) enables multi-client collaboration but faces challenges from data and task heterogeneity, limiting its practicality; (2) Graph foundation models (GFM) offer strong domain generalization but are usually trained on single machines, missing out on cross-silo data and resources. These paradigms are complementary, and their integration brings notable benefits. Motivated by this, we propose FedGFM, a novel decentralized GFM training paradigm. However, a key challenge is knowledge entanglement, where multi-domain knowledge merges into indistinguishable representations, hindering downstream adaptation. To address this, we present FedGFM+, an enhanced framework with two core modules to reduce knowledge entanglement: (1) AncDAI: A global anchor-based domain-aware initialization strategy. Before pre-training, each client encodes its local graph into domain-specific prototypes that serve as semantic anchors. Synthetic embeddings around these anchors initialize the global model. We theoretically prove these prototypes are distinguishable across domains, providing a strong inductive bias to disentangle domain-specific knowledge. (2) AdaDPP: A local adaptive domain-sensitive prompt pool. Each client learns a lightweight graph prompt capturing domain semantics during pre-training. During fine-tuning, prompts from all clients form a pool from which the GFM selects relevant prompts to augment target graph attributes, improving downstream adaptation. FedGFM+ is evaluated on 8 diverse benchmarks across multiple domains and tasks, outperforming 20 baselines from supervised learning, FGL, and federated GFM variants.
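The AncDAI idea of domain-specific prototypes as semantic anchors can be sketched in a few lines; the implementation below is an assumption (mean-pooled embeddings as prototypes, Gaussian jitter for the synthetic initialization), not the paper's actual method:

```python
import numpy as np

def domain_prototypes(client_embeddings):
    """For each client, collapse its local node embeddings into one
    domain prototype (here simply the mean embedding), standing in for
    AncDAI's domain-specific semantic anchors."""
    return {cid: emb.mean(axis=0) for cid, emb in client_embeddings.items()}

def synthetic_init(prototypes, n_per_domain=8, scale=0.05, seed=0):
    """Draw synthetic embeddings around each anchor; stacking them gives
    a domain-aware initialization for the global model."""
    rng = np.random.default_rng(seed)
    samples = [p + scale * rng.standard_normal((n_per_domain, p.shape[0]))
               for p in prototypes.values()]
    return np.vstack(samples)

# Two toy "clients" whose embeddings come from well-separated domains:
rng = np.random.default_rng(1)
clients = {"citation": rng.normal(0.0, 1.0, (100, 16)),
           "molecule": rng.normal(5.0, 1.0, (100, 16))}
anchors = domain_prototypes(clients)
init = synthetic_init(anchors)
```

With separated domains the anchors stay far apart, which is the distinguishability property the paper proves gives a disentangling inductive bias.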
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Georgia > Richmond County > Augusta (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Research Report > New Finding (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Inductive Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Designing and Evaluating an AI-driven Immersive Multidisciplinary Simulation (AIMS) for Interprofessional Education
Wang, Ruijie, Lu, Jie, Pei, Bo, Jones, Evonne, Brinson, Jamey, Brown, Timothy
Interprofessional education has long relied on case studies and the use of standardized patients to support teamwork, communication, and related collaborative competencies among healthcare professionals. However, traditional approaches are often limited by cost, scalability, and their inability to mimic the dynamic complexity of real-world clinical scenarios. To address these challenges, we designed and developed AIMS (AI-Enhanced Immersive Multidisciplinary Simulations), a virtual simulation that integrates a large language model (Gemini-2.5-Flash), a Unity-based virtual environment engine, and a character creation pipeline to support synchronized, multimodal interactions between the user and the virtual patient. AIMS was designed to enhance collaborative clinical reasoning and health promotion competencies among students from pharmacy, medicine, nursing, and social work. A formal usability testing session was conducted in which participants assumed professional roles on a healthcare team and engaged in a mix of scripted and unscripted conversations. Participants explored the patient's symptoms, social context, and care needs. Usability issues were identified (e.g., audio routing, response latency) and used to guide subsequent refinements. Overall, findings suggest that AIMS supports realistic, profession-specific, and contextually appropriate conversations. We discuss both the technical and pedagogical innovations of AIMS and conclude with future directions.
- North America > United States > Georgia > Clarke County > Athens (0.14)
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- North America > Canada > Ontario > Toronto (0.05)
- (4 more...)
- Instructional Material (1.00)
- Research Report > Experimental Study (0.68)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Education > Educational Setting > Higher Education (0.47)
- Education > Educational Setting > Online (0.46)
- Europe > United Kingdom > England > Merseyside > Liverpool (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > Georgia > Richmond County > Augusta (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
A Survey of Foundation Models for IoT: Taxonomy and Criteria-Based Analysis
Wei, Hui, Lee, Dong Yoon, Rohal, Shubham, Hu, Zhizhang, Rossi, Ryan, Fang, Shiwei, Pan, Shijia
Foundation models have gained growing interest in the IoT domain due to their reduced reliance on labeled data and strong generalizability across tasks, which address key limitations of traditional machine learning approaches. However, most existing foundation model based methods are developed for specific IoT tasks, making it difficult to compare approaches across IoT domains and limiting guidance for applying them to new tasks. This survey aims to bridge this gap by providing a comprehensive overview of current methodologies, organizing them around four performance objectives shared across domains: efficiency, context-awareness, safety, and security & privacy. For each objective, we review representative works and summarize commonly used techniques and evaluation metrics. This objective-centric organization enables meaningful cross-domain comparisons and offers practical insights for selecting and designing foundation model based solutions for new IoT tasks. We conclude with key directions for future research to guide both practitioners and researchers in advancing the use of foundation models in IoT applications.
- North America > United States > California > Merced County > Merced (0.14)
- Asia > Nepal (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- (6 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Consumer Health (1.00)
- Government (1.00)
- (2 more...)
DARD: Dice Adversarial Robustness Distillation against Adversarial Attacks
Zou, Jing, Zhang, Shungeng, Qiu, Meikang, Li, Chong
Deep learning models are vulnerable to adversarial examples, posing critical security challenges in real-world applications. While Adversarial Training (AT) is a widely adopted defense mechanism to enhance robustness, it often incurs a trade-off by degrading performance on unperturbed, natural data. Recent efforts have highlighted that larger models exhibit enhanced robustness over their smaller counterparts. In this paper, we empirically demonstrate that such robustness can be systematically distilled from large teacher models into compact student models. To achieve better performance, we introduce Dice Adversarial Robustness Distillation (DARD), a novel method designed to transfer robustness through a tailored knowledge distillation paradigm. Additionally, we propose Dice Projected Gradient Descent (DPGD), an adversarial example generation method optimized for effective attacks. Our extensive experiments demonstrate that the DARD approach consistently outperforms adversarially trained networks with the same architecture, achieving superior robustness and standard accuracy.
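DPGD's dice-based variant is not specified in the abstract, but the standard PGD attack it builds on is well known. A minimal NumPy sketch of L-infinity PGD against a logistic model (the model, data, and hyperparameters are illustrative):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """Untargeted L-infinity PGD on a logistic model p = sigmoid(w.x + b):
    take signed-gradient ascent steps on the cross-entropy loss, then
    project back into the eps-ball around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w                        # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)     # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
    return x_adv

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([2.0, -1.0]); y = 1.0  # clean logit = 4.0: confident class 1
x_adv = pgd_attack(x, y, w, b)
```

Each step moves the input in the worst-case direction for the model while the projection keeps the perturbation imperceptibly small; adversarial training folds such examples back into the training loop.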
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Singapore (0.05)
- Europe > Switzerland (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Research Report > Promising Solution (0.66)
Alignment and Safety in Large Language Models: Safety Mechanisms, Training Paradigms, and Emerging Challenges
Lu, Haoran, Fang, Luyang, Zhang, Ruidong, Li, Xinliang, Cai, Jiazhang, Cheng, Huimin, Tang, Lin, Liu, Ziyu, Sun, Zeliang, Wang, Tao, Zhang, Yingchuan, Zidan, Arif Hassan, Xu, Jinwen, Yu, Jincheng, Yu, Meizhi, Jiang, Hanqi, Gong, Xilin, Luo, Weidi, Sun, Bolun, Chen, Yongkai, Ma, Terry, Wu, Shushan, Zhou, Yifan, Chen, Junhao, Xiang, Haotian, Zhang, Jing, Jahin, Afrar, Ruan, Wei, Deng, Ke, Pan, Yi, Wang, Peilong, Li, Jiahui, Liu, Zhengliang, Zhang, Lu, Zhao, Lin, Liu, Wei, Zhu, Dajiang, Xing, Xin, Dou, Fei, Zhang, Wei, Huang, Chao, Liu, Rongjie, Zhang, Mengrui, Liu, Yiwen, Sun, Xiaoxiao, Lu, Qin, Xiang, Zhen, Zhong, Wenxuan, Liu, Tianming, Ma, Ping
Due to the remarkable capabilities and growing impact of large language models (LLMs), they have been deeply integrated into many aspects of society. Thus, ensuring their alignment with human values and intentions has emerged as a critical challenge. This survey provides a comprehensive overview of practical alignment techniques, training protocols, and empirical findings in LLM alignment. We analyze the development of alignment methods across diverse paradigms, characterizing the fundamental trade-offs between core alignment objectives. Our analysis shows that while supervised fine-tuning enables basic instruction-following, preference-based methods offer more flexibility for aligning with nuanced human intent. We discuss state-of-the-art techniques, including Direct Preference Optimization (DPO), Constitutional AI, brain-inspired methods, and alignment uncertainty quantification (AUQ), highlighting their approaches to balancing quality and efficiency. We review existing evaluation frameworks and benchmarking datasets, emphasizing limitations such as reward misspecification, distributional robustness, and scalable oversight. We summarize strategies adopted by leading AI labs to illustrate the current state of practice. We conclude by outlining open problems in oversight, value pluralism, robustness, and continuous alignment. This survey aims to inform both researchers and practitioners navigating the evolving landscape of LLM alignment.
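Of the preference-based methods surveyed, DPO has a particularly compact objective. A minimal sketch of the per-pair DPO loss (the numeric log-probabilities below are made up for illustration):

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss for one (chosen, rejected) pair:
        -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))
    where logp_* are sequence log-probabilities under the policy and
    ref_logp_* under the frozen reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

# The loss falls as the policy favors the chosen response more strongly
# than the reference model does:
aligned = dpo_loss(logp_w=-10.0, logp_l=-14.0, ref_logp_w=-12.0, ref_logp_l=-12.0)
neutral = dpo_loss(logp_w=-12.0, logp_l=-12.0, ref_logp_w=-12.0, ref_logp_l=-12.0)
```

Because the reward is implicit in the log-probability ratios, DPO needs no separately trained reward model, which is the flexibility advantage over plain supervised fine-tuning noted above.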
- North America > United States > Georgia > Clarke County > Athens (0.14)
- North America > United States > Arizona > Pima County > Tucson (0.14)
- Europe > Austria > Vienna (0.14)
- (28 more...)
- Research Report > Promising Solution (1.00)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Government > Military (1.00)
- (7 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (2 more...)